Fast Stochastic Ordinal Embedding With Variance Reduction and Adaptive Step Size

Authors

Abstract

Learning representation from relative similarity comparisons, often called ordinal embedding, has attracted increasing attention in recent years. Most of the existing methods are based on semi-definite programming (SDP), which is generally time-consuming and degrades scalability, especially when confronting large-scale data. To overcome this challenge, we propose a stochastic algorithm called SVRG-SBB, which has the following features: i) good scalability, achieved by dropping the positive semi-definite (PSD) constraint and adopting a fast stochastic solver, namely the stochastic variance reduced gradient (SVRG) method; and ii) adaptive learning, achieved by introducing a new adaptive step size, the stabilized Barzilai-Borwein (SBB) step size. Theoretically, under some natural assumptions, we show an O(1/T) rate of convergence to a stationary point of the proposed algorithm, where T is the total number of iterations. Under a further Polyak-Łojasiewicz assumption, we can show global linear convergence (i.e., exponential convergence to a global optimum) of the algorithm. Numerous simulations and real-world data experiments are conducted to show the effectiveness of the proposed algorithm by comparing it with state-of-the-art methods; notably, it achieves much lower computational cost with comparable prediction performance.
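To make the algorithmic idea concrete, here is a minimal Python sketch combining the SVRG gradient estimator with a stabilized Barzilai-Borwein step size. The generic component-gradient interface, the initial step size, and the stabilization constant eps are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def svrg_sbb(grad_i, x0, n, n_epochs=30, m=None, eps=1e-4, rng=None):
    """Sketch of SVRG with a stabilized Barzilai-Borwein (SBB) step size.

    grad_i(x, i) -- gradient of the i-th component function at x
    n            -- number of component functions
    """
    rng = rng or np.random.default_rng(0)
    m = m or n                        # inner-loop length
    x_tilde = x0.copy()
    x_prev, g_prev = None, None
    eta = 1e-2                        # initial step size for the first epoch
    for _ in range(n_epochs):
        # full gradient at the snapshot point
        g_full = np.mean([grad_i(x_tilde, i) for i in range(n)], axis=0)
        # SBB step size: BB ratio with a stabilizing eps * ||dx||^2 term
        if x_prev is not None:
            dx, dg = x_tilde - x_prev, g_full - g_prev
            eta = (dx @ dx) / (abs(dx @ dg) + eps * (dx @ dx) + 1e-12)
        x_prev, g_prev = x_tilde.copy(), g_full.copy()
        x = x_tilde.copy()
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced gradient estimate
            v = grad_i(x, i) - grad_i(x_tilde, i) + g_full
            x -= eta * v
        x_tilde = x                   # new snapshot
    return x_tilde
```

The absolute value and the eps-weighted term in the denominator keep the step size positive and bounded even when the curvature estimate dx·dg is near zero or negative, which is what the "stabilized" in SBB refers to.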

Similar Articles

Stochastic Non-convex Ordinal Embedding with Stabilized Barzilai-Borwein Step Size

Learning representation from relative similarity comparisons, often called ordinal embedding, has attracted increasing attention in recent years. Most of the existing methods are batch methods designed mainly based on convex optimization, e.g., the projected gradient descent method. However, they are generally time-consuming because the singular value decomposition (SVD) is commonly adopted during t...
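For context, the expensive step in such batch convex methods is typically the projection of the Gram matrix onto the PSD cone after each gradient step, which costs a full eigendecomposition (or SVD) per iteration. A minimal sketch, with illustrative names:

```python
import numpy as np

def project_psd(G):
    """Project a symmetric matrix onto the PSD cone by clipping
    negative eigenvalues -- the per-iteration eigen/SVD cost that
    limits the scalability of batch ordinal-embedding solvers."""
    w, V = np.linalg.eigh((G + G.T) / 2)   # symmetrize, then decompose
    return (V * np.clip(w, 0.0, None)) @ V.T
```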

A stochastic gradient adaptive filter with gradient adaptive step size

This paper presents an adaptive step-size gradient adaptive filter. The step size of the adaptive filter is changed according to a gradient descent algorithm designed to reduce the squared estimation error during each iteration. An approximate analysis of the performance of the adaptive filter when its inputs are zero-mean, white, and Gaussian and the set of optimal coefficients is time-varyin...
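This description matches the family of gradient-adaptive step-size LMS filters, where the step size itself is updated by a gradient step on the squared estimation error. A minimal sketch under that reading; the adaptation gain rho, the clipping bounds, and the signal names (1-D arrays x for the input, d for the desired signal) are assumptions:

```python
import numpy as np

def vs_lms(x, d, n_taps=8, mu0=0.01, rho=1e-4, mu_min=1e-5, mu_max=0.1):
    """LMS filter whose step size mu is itself adapted by a gradient
    step that reduces the squared estimation error each iteration."""
    w = np.zeros(n_taps)
    mu, prev_e, prev_u = mu0, 0.0, np.zeros(n_taps)
    y = np.zeros(len(x))
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]          # current input regressor
        y[k] = w @ u
        e = d[k] - y[k]                    # estimation error
        # gradient step on mu: d(e^2)/d(mu) ~ -2 e(k) e(k-1) u(k).u(k-1)
        mu = np.clip(mu + rho * e * prev_e * (u @ prev_u), mu_min, mu_max)
        w += mu * e * u                    # standard LMS coefficient update
        prev_e, prev_u = e, u.copy()
    return w, y
```

Clipping mu to [mu_min, mu_max] keeps the filter stable when the error-correlation estimate driving the step-size update is noisy.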

Ordinal Embedding: Approximation Algorithms and Dimensionality Reduction

This paper studies how to optimally embed a general metric, represented by a graph, into a target space while preserving the relative magnitudes of most distances. More precisely, in an ordinal embedding, we must preserve the relative order between pairs of distances (which pairs are larger or smaller), and not necessarily the values of the distances themselves. The relaxation of an ordinal emb...
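Written out, the order-preservation requirement described above is the following standard constraint on an embedding f of the items (a common formulation, given here for orientation rather than as this paper's exact definition):

```latex
% Ordinal embedding: only the relative order of distances must survive.
% For the original metric d and the embedded distances, require
\[
  d(x_i, x_j) < d(x_k, x_l)
  \;\Longrightarrow\;
  \lVert f(x_i) - f(x_j) \rVert \le \lVert f(x_k) - f(x_l) \rVert
\]
% for (most) quadruples (i, j, k, l); the distance values themselves
% need not be preserved.
```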

Accelerated Stochastic ADMM with Variance Reduction

Alternating Direction Method of Multipliers (ADMM) is a popular method for solving machine learning problems. Stochastic ADMM was first proposed to reduce the per-iteration computational complexity, making it more suitable for big-data problems. Recently, variance reduction techniques have been integrated with stochastic ADMM in order to obtain a fast convergence rate, such as SAG-ADMM an...
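The common ingredient in SVRG-style stochastic ADMM variants is to replace the plain stochastic gradient in the x-update with a variance-reduced estimate. Below is a minimal sketch of one inner iteration for a generic problem min f(x) + g(z) subject to Ax = z; the linearized x-update, the l1 choice of g, and all names are illustrative assumptions, not any specific paper's algorithm:

```python
import numpy as np

def svrg_admm_step(x, z, u, grad_fi, n, x_tilde, g_full, A, rho, eta, rng):
    """One inner step of a linearized stochastic ADMM whose gradient is
    variance-reduced in SVRG style (sketch for min f(x) + g(z), Ax = z,
    with g handled by its prox -- here soft-thresholding for g = ||.||_1)."""
    i = rng.integers(n)
    # SVRG gradient estimate of f at x (x_tilde, g_full are the snapshot)
    v = grad_fi(x, i) - grad_fi(x_tilde, i) + g_full
    # linearized x-update: gradient step on f plus the augmented term
    x = x - eta * (v + rho * A.T @ (A @ x - z + u))
    # z-update: prox of g, i.e., soft-thresholding for the l1 norm
    w = A @ x + u
    z = np.sign(w) * np.maximum(np.abs(w) - 1.0 / rho, 0.0)
    u = u + A @ x - z                     # scaled dual update
    return x, z, u
```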

Stochastic Conjugate Gradient Algorithm with Variance Reduction

Conjugate gradient methods are a class of important methods for solving linear equations and nonlinear optimization problems. In our work, we propose a new stochastic conjugate gradient algorithm with variance reduction (CGVR) and prove its linear convergence with the Fletcher-Reeves method for strongly convex and smooth functions. We experimentally demonstrate that the CGVR algorithm converges fast...
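A minimal sketch of the combination described here: an SVRG-style variance-reduced gradient driving a conjugate-direction update with a Fletcher-Reeves beta. The fixed step size and the epoch structure are simplifying assumptions, not the paper's exact CGVR algorithm:

```python
import numpy as np

def cgvr(grad_i, x0, n, n_epochs=20, m=50, eta=0.05, rng=None):
    """Stochastic conjugate gradient with SVRG-style variance reduction
    and a Fletcher-Reeves beta (illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    x_tilde = x0.copy()
    for _ in range(n_epochs):
        # full gradient at the snapshot point
        g_full = np.mean([grad_i(x_tilde, i) for i in range(n)], axis=0)
        x, g_old = x_tilde.copy(), g_full.copy()
        d = -g_old                       # initial search direction
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced gradient estimate
            g = grad_i(x, i) - grad_i(x_tilde, i) + g_full
            beta = (g @ g) / (g_old @ g_old + 1e-12)   # Fletcher-Reeves
            d = -g + beta * d            # conjugate direction update
            x += eta * d                 # fixed step size for simplicity
            g_old = g
        x_tilde = x                      # new snapshot
    return x_tilde
```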

Journal

Journal title: IEEE Transactions on Knowledge and Data Engineering

Year: 2021

ISSN: 1558-2191, 1041-4347, 2326-3865

DOI: https://doi.org/10.1109/tkde.2019.2956700